Recently, knowledge graph embedding methods have attracted numerous researchers' interest due to their outstanding effectiveness and robustness in knowledge representation. However, existing methods still have some limitations. On the one hand, translation-based representation models focus on devising translation principles to represent knowledge from a global perspective, but they fail to learn the various types of relational facts discriminatively. This tends to cause congestion of the entities involved in complex relational facts in the embedding space, reducing the precision of the representation vectors associated with those entities. On the other hand, parallel subgraphs extracted from the original graph can be used to learn local relational facts discriminatively; however, the subgraph extraction may damage some relational facts of the original knowledge graph. Thus, previous methods are unable to learn local and global knowledge representations uniformly. To that end, we propose a multiview translation learning model, named MvTransE, which learns relational facts from global-view and local-view perspectives, respectively. Specifically, we first construct multiple parallel subgraphs from the original knowledge graph by considering entity semantic and structural features simultaneously. Then, we embed the original graph and the constructed subgraphs into the corresponding global and local feature spaces. Finally, we propose a multiview fusion strategy to integrate the multiview representations of relational facts. Extensive experiments on four public datasets demonstrate the superiority of our model in knowledge graph representation tasks compared to state-of-the-art methods.
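To make the translation principle mentioned above concrete, the following minimal sketch scores a triple (h, r, t) by the distance between h + r and t, as in classical translation-based models such as TransE; the abstract does not give MvTransE's exact scoring function, so this is an illustrative assumption, not the paper's method.

```python
import math

def translation_score(h, r, t):
    """L2 distance ||h + r - t||; a lower score means the fact (h, r, t)
    is more plausible under the translation principle h + r ~ t."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 3-dimensional embeddings (hypothetical values for illustration only).
h = [0.1, 0.2, 0.3]       # head entity embedding
r = [0.4, 0.0, -0.1]      # relation embedding
t_true = [0.5, 0.2, 0.2]  # tail chosen so that t_true = h + r
t_false = [1.0, -1.0, 1.0]

# The true fact scores (near) zero; the corrupted fact scores much higher.
assert translation_score(h, r, t_true) < translation_score(h, r, t_false)
```

Because one global translation of this kind is shared by all facts, entities that participate in many complex (e.g., one-to-many) relations are pushed toward the same region of the embedding space, which is the "entity congestion" problem the paper targets.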